Hierarchical text classification aims to leverage the label hierarchy in multi-label text classification. Existing methods encode the label hierarchy from a global view, treating it as a static hierarchical structure containing all labels. Since the global hierarchy is static and irrelevant to individual text samples, these methods struggle to exploit hierarchical information. In contrast to the global hierarchy, the local hierarchy is the structured label hierarchy corresponding to each text sample. It is dynamic and relevant to the text sample, yet it is ignored in previous methods. To exploit both global and local hierarchies, we propose Hierarchy-guided BERT with Global and Local hierarchies (HBGL), which utilizes the large-scale parameters and prior language knowledge of BERT to model both global and local hierarchies. Moreover, HBGL avoids the intentional fusion of separate semantic and hierarchical modules by directly modeling semantic and hierarchical information with BERT. Compared with the state-of-the-art method HGCLR, our method achieves significant improvement on three benchmark datasets.
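To make the local-hierarchy notion concrete (this is an illustration, not the authors' implementation), a per-sample local hierarchy can be read off the global label taxonomy as the ancestor closure of that sample's gold labels. The sketch below assumes a simple parent-map representation and hypothetical label names.

```python
# Minimal sketch: extracting a sample's local hierarchy from a global label taxonomy.
# The parent map and label names are hypothetical; HBGL's actual encoding of the
# hierarchy into BERT is more involved.
from typing import Dict, Optional, Set

def local_hierarchy(gold_labels: Set[str], parent: Dict[str, Optional[str]]) -> Set[str]:
    """Return the gold labels plus all of their ancestors (the per-sample sub-hierarchy)."""
    local = set()
    for label in gold_labels:
        node = label
        while node is not None and node not in local:
            local.add(node)
            node = parent.get(node)
    return local

# Toy global hierarchy: root -> {science, sports}, science -> {physics, biology}
parent = {"science": None, "sports": None, "physics": "science", "biology": "science"}
print(local_hierarchy({"physics"}, parent))  # {'physics', 'science'}
```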
This report introduces the technical details of the team FuXi-Fresher for the LVIS Challenge 2021. Our method focuses on two aspects of the problem: the long-tail distribution and the segmentation quality of masks and boundaries. Based on the advanced HTC instance segmentation algorithm, we connect a transformer backbone (Swin-L) through composite connections inspired by CBNetV2 to enhance the baseline results. To alleviate the long-tail distribution problem, we design a Distribution Balanced method that includes dataset-balancing and loss-function-balancing modules. Further, we use a Mask and Boundary Refinement method composed of mask scoring and RefineMask algorithms to improve segmentation quality. In addition, we are pleasantly surprised to find that early stopping combined with the EMA method achieves a great improvement. Finally, by using multi-scale testing and increasing the upper limit on the number of objects detected per image, we achieved more than 45.4% boundary AP on the val set of the LVIS Challenge 2021. On the test data of the LVIS Challenge 2021, we rank 1st and achieve 48.1% AP. Notably, our APr of 47.5% is very close to the APf of 48.0%. * indicates equal contribution.
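The EMA trick mentioned above is a generic one: keep an exponential moving average of the model weights and evaluate the averaged copy. A minimal, framework-agnostic sketch (the decay of 0.999 is a typical but hypothetical choice, not taken from the report):

```python
# Minimal sketch of weight EMA: maintain a shadow copy of the parameters, update it
# after every optimizer step, and evaluate/export the shadow weights.
import numpy as np

class EMA:
    def __init__(self, params, decay=0.999):
        self.decay = decay
        self.shadow = {name: p.copy() for name, p in params.items()}

    def update(self, params):
        for name, p in params.items():
            self.shadow[name] = self.decay * self.shadow[name] + (1.0 - self.decay) * p

params = {"w": np.zeros(3)}
ema = EMA(params)
for step in range(100):
    params["w"] += np.random.randn(3) * 0.01   # stand-in for an optimizer step
    ema.update(params)
print(ema.shadow["w"])                          # smoothed weights used at evaluation time
```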
Infrared small target detection is an important fundamental task in infrared systems. Accordingly, many infrared small target detection methods have been proposed, among which low-rank models have served as a powerful tool. However, low-rank-based methods assign the same weight to different singular values, which leads to inaccurate background estimation. Considering that different singular values have different importance and should be treated discriminatively, this paper proposes a non-convex tensor low-rank approximation (NTLA) method for infrared small target detection. In our method, the NTLA regularization adaptively assigns different weights to different singular values for accurate background estimation. Based on the proposed NTLA, we propose an asymmetric spatial-temporal total variation (ASTTV) regularization to achieve more accurate background estimation in complex scenes. Compared with traditional total variation methods, ASTTV exploits different smoothness strengths for spatial and temporal regularization. We design an efficient algorithm to find the optimal solution of our method. Compared with some state-of-the-art methods, the proposed method achieves improvements on various evaluation metrics. Extensive experimental results on various complex scenes demonstrate that our method has strong robustness and a low false alarm rate. Code is available at https://github.com/liuting20a/asttv-ntla.
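To illustrate the key idea of treating singular values discriminatively, a weighted low-rank surrogate can be written as follows (an illustrative formulation, not necessarily the exact NTLA regularizer):

```latex
% Illustrative weighted surrogate of the nuclear norm: instead of \sum_i \sigma_i(B),
% each singular value gets its own weight, so large (background-dominant) singular
% values are penalized less than small ones.
\min_{B,\,T}\ \sum_{i} w_i\,\sigma_i(B) + \lambda\,\|T\|_1
\quad \text{s.t.}\quad D = B + T,
\qquad w_i = \frac{c}{\sigma_i(B) + \varepsilon},
```

where $D$ is the observed data, $B$ the low-rank background, $T$ the sparse target component, and $c, \varepsilon$ are positive constants; the actual NTLA regularizer and the additional ASTTV term in the paper differ in detail.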
Recognition of human pose and action is crucial for autonomous systems to interact smoothly with people. However, cameras generally capture human pose in 2D as images and videos, which can exhibit significant appearance variation across viewpoints and make recognition tasks challenging. To address this, we explore recognizing similarity in 3D human body pose from 2D information, which has not been well studied in existing work. Here, we propose an approach for learning a compact view-invariant embedding space from 2D body joint keypoints, without explicitly predicting 3D pose. Input ambiguities of 2D poses due to projection and occlusion are difficult to represent with a deterministic mapping, so we adopt a probabilistic formulation for the embedding space. Experimental results show that our embedding model achieves higher accuracy than 3D pose estimation models when retrieving similar poses across different camera views. We also show that by training a simple temporal embedding model, we achieve superior performance on pose sequence retrieval and greatly reduce the embedding dimension compared to stacked frame-based embeddings, enabling efficient large-scale retrieval. Furthermore, to enable our embeddings to work with partially visible input, we investigate different keypoint occlusion augmentation strategies during training. We demonstrate that these occlusion augmentations significantly improve retrieval performance on partial 2D input poses. Results on action recognition and video alignment show that using our embeddings without any additional training achieves competitive performance relative to other models specifically trained for each task.
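A minimal sketch of the probabilistic-embedding idea, mapping 2D keypoints to a Gaussian in embedding space (the layer sizes, keypoint count, and sampling scheme are hypothetical, not the authors' architecture):

```python
import torch
import torch.nn as nn

class ProbabilisticPoseEmbedder(nn.Module):
    """Maps flattened 2D keypoints to a Gaussian embedding (mean + diagonal log-variance)."""
    def __init__(self, num_keypoints=17, embed_dim=16, hidden=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(num_keypoints * 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean_head = nn.Linear(hidden, embed_dim)
        self.logvar_head = nn.Linear(hidden, embed_dim)

    def forward(self, keypoints_2d):             # (batch, num_keypoints, 2)
        h = self.backbone(keypoints_2d.flatten(1))
        mean, logvar = self.mean_head(h), self.logvar_head(h)
        z = mean + torch.randn_like(mean) * (0.5 * logvar).exp()  # reparameterized sample
        return mean, logvar, z

model = ProbabilisticPoseEmbedder()
mean, logvar, z = model(torch.randn(4, 17, 2))   # 4 poses, 17 joints each
```

The variance head is what lets ambiguous 2D inputs (projection, occlusion) map to broad distributions rather than a single point.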
Existing measures and representations for trajectories have two longstanding fundamental shortcomings: they are computationally expensive, and they cannot guarantee the 'uniqueness' property of a distance function, $dist(X,Y) = 0$ if and only if $X = Y$, where $X$ and $Y$ are two trajectories. This paper proposes a simple yet powerful way to represent trajectories and measure the similarity between two trajectories using a distributional kernel, addressing these shortcomings. It is a principled approach based on kernel mean embedding, which has a strong theoretical underpinning. It has three distinctive features in comparison with existing approaches. (1) A distributional kernel is used for the very first time for trajectory representation and similarity measurement. (2) It does not rely on the point-to-point distances used in most existing trajectory distances. (3) It requires no learning, unlike existing learning and deep learning approaches. We show the generality of this new approach in three applications: (a) trajectory anomaly detection, (b) anomalous sub-trajectory detection, and (c) trajectory pattern mining. We identify that the distributional kernel has (i) a unique data-dependent property, together with the above uniqueness property, which are the key factors leading to its superior task-specific performance; and (ii) a runtime that is orders of magnitude faster than existing distance measures.
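A minimal sketch of the kernel-mean-embedding idea, using a Gaussian kernel approximated with random Fourier features as a stand-in for the distributional kernel used in the paper (the bandwidth and feature dimension are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
D, d, sigma = 256, 2, 1.0                        # feature dim, point dim, kernel bandwidth
W = rng.normal(scale=1.0 / sigma, size=(D, d))   # random Fourier features for a Gaussian kernel
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def embed(trajectory):
    """Kernel mean embedding: average the feature map over the trajectory's points."""
    phi = np.sqrt(2.0 / D) * np.cos(trajectory @ W.T + b)   # (n_points, D)
    return phi.mean(axis=0)

def similarity(traj_x, traj_y):
    """Approximate distributional similarity as an inner product of mean embeddings."""
    return float(embed(traj_x) @ embed(traj_y))

X = np.cumsum(rng.normal(size=(100, d)), axis=0)  # toy trajectories (random walks)
Y = np.cumsum(rng.normal(size=(120, d)), axis=0)
print(similarity(X, X), similarity(X, Y))
```

Note that each trajectory is treated as a set of points and compressed to a single fixed-length vector, so no point-to-point alignment is needed and similarity is a plain dot product.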
Detecting abrupt changes in data distribution is one of the most significant tasks in streaming data analysis. Although many unsupervised Change-Point Detection (CPD) methods have been proposed recently to identify such changes, they still suffer from missing subtle changes, poor scalability, and/or sensitivity to noise points. To meet these challenges, we are the first to generalise the CPD problem, treating it as a special case of the Change-Interval Detection (CID) problem. We then propose a CID method, named iCID, based on the recent Isolation Distributional Kernel (IDK). iCID identifies a change interval when there is a high dissimilarity score between two non-homogeneous, temporally adjacent intervals. The data-dependent property and finite feature map of IDK enable iCID to efficiently identify various types of change points in data streams while tolerating noise points. Moreover, the proposed online and offline versions of iCID are able to optimise key parameter settings. The effectiveness and efficiency of iCID have been systematically verified on both synthetic and real-world datasets.
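To illustrate the interval-dissimilarity scoring, the sketch below scans a stream with adjacent windows and scores each pair with a Gaussian-kernel MMD as a simple stand-in for the IDK-based dissimilarity; the window length and bandwidth are arbitrary, not the paper's settings.

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def interval_dissimilarity(left, right, sigma=1.0):
    """Squared MMD between two adjacent intervals: high values suggest a change interval."""
    return rbf(left, left, sigma).mean() + rbf(right, right, sigma).mean() \
        - 2.0 * rbf(left, right, sigma).mean()

def scan(stream, window=50):
    """Score every pair of adjacent, non-overlapping windows along the stream."""
    scores = []
    for start in range(0, len(stream) - 2 * window, window):
        left = stream[start:start + window]
        right = stream[start + window:start + 2 * window]
        scores.append(interval_dissimilarity(left, right))
    return np.array(scores)

rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(0, 1, (300, 2)), rng.normal(3, 1, (300, 2))])
print(scan(stream).round(3))   # scores peak near the distribution change
```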
Semantic segmentation is a computer vision task that assigns each pixel of an image to a class, and it must be performed with both accuracy and efficiency. Most existing deep FCNs involve heavy computation and are very power hungry, making them unsuitable for real-time applications on portable devices. This project analyzes current semantic segmentation models to explore the feasibility of applying them to emergency response during catastrophic events. We compare the performance of real-time semantic segmentation models with non-real-time counterparts, constrained to aerial images under adverse settings. Furthermore, we train several models on the FloodNet dataset, containing UAV images captured after Hurricane Harvey, and benchmark their performance on special classes such as flooded vs. non-flooded buildings and flooded vs. non-flooded roads. In this project, we developed a real-time UNet-based model and deployed that network on a Jetson AGX Xavier module.
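For orientation, a minimal UNet-style encoder-decoder in PyTorch is sketched below; the channel widths and depth are illustrative and not the deployed model.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level UNet sketch: encoder, bottleneck, decoder with skip connections."""
    def __init__(self, n_classes, base=16):
        super().__init__()
        self.enc1 = conv_block(3, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bott = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                       # full resolution
        e2 = self.enc2(self.pool(e1))           # 1/2 resolution
        b = self.bott(self.pool(e2))            # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                    # per-pixel class logits

logits = TinyUNet(n_classes=10)(torch.randn(1, 3, 64, 64))  # input sides divisible by 4
```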
We present the Recurrent Interface Network (RIN), a neural net architecture that allocates computation adaptively to the input according to the distribution of information, allowing it to scale to iterative generation of high-dimensional data. Hidden units of RINs are partitioned into the interface, which is locally connected to inputs, and latents, which are decoupled from inputs and can exchange information globally. The RIN block selectively reads from the interface into latents for high-capacity processing, with incremental updates written back to the interface. Stacking multiple blocks enables effective routing across local and global levels. While routing adds overhead, the cost can be amortized in recurrent computation settings where inputs change gradually while more global context persists, such as iterative generation using diffusion models. To this end, we propose a latent self-conditioning technique that "warm-starts" the latents at each iteration of the generation process. When applied to diffusion models operating directly on pixels, RINs yield state-of-the-art image and video generation without cascades or guidance, while being domain-agnostic and up to 10$\times$ more efficient compared to specialized 2D and 3D U-Nets.
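A minimal sketch of the read-process-write pattern described above, using standard cross- and self-attention as stand-ins for the RIN block (the dimensions, residual structure, and layer counts are hypothetical):

```python
import torch
import torch.nn as nn

class RINBlockSketch(nn.Module):
    """Read: latents attend to the interface; Process: latents self-attend;
    Write: the interface attends to the updated latents."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.read = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.process = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.write = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, interface, latents):
        latents = latents + self.read(latents, interface, interface)[0]     # read
        latents = latents + self.process(latents, latents, latents)[0]      # process
        latents = latents + self.mlp(latents)
        interface = interface + self.write(interface, latents, latents)[0]  # write back
        return interface, latents

block = RINBlockSketch()
interface = torch.randn(2, 1024, 256)   # locally connected units, e.g. tokenized pixels
latents = torch.randn(2, 128, 256)      # decoupled units for global, high-capacity processing
interface, latents = block(interface, latents)
```

Latent self-conditioning would correspond to initializing `latents` from the previous diffusion iteration's latents rather than from scratch.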
This paper describes a Relevance-Zone pattern table (RZT) that can be used to replace a traditional transposition table. An RZT stores exact game values for patterns discovered during a Relevance-Zone-Based Search (RZS), which is the current state of the art in solving L&D problems in Go. Positions that share the same pattern can reuse the same exact game value in the RZT. The pattern matching scheme for RZTs is implemented using a radix tree, taking into consideration patterns with different shapes. To improve the efficiency of table lookups, we designed a heuristic that prevents redundant lookups. The heuristic can safely skip previously queried patterns for a given position, reducing the overhead to 10% of the original cost. We also analyze the time complexity of the RZT both theoretically and empirically. Experiments show that, in practice, the overhead of traversing the radix tree during lookup grows only logarithmically with the number of entries stored in the table. They also show that using an RZT instead of a traditional transposition table significantly reduces the number of searched nodes on two data sets of 7x7 and 19x19 L&D Go problems.
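A much-simplified sketch of the pattern-table idea follows: a plain dictionary of pattern strings to exact game values plus a per-position set of already-queried patterns for the skip heuristic. It abstracts away the radix tree, pattern shapes, and all Go-specific details; the pattern encoding is hypothetical.

```python
class PatternTable:
    """Toy stand-in for an RZT: exact game values keyed by a pattern string,
    with redundant lookups skipped per position (the heuristic described above)."""
    def __init__(self):
        self.values = {}              # pattern -> exact game value

    def store(self, pattern, value):
        self.values[pattern] = value

    def lookup(self, pattern, queried):
        if pattern in queried:        # skip patterns already queried for this position
            return None
        queried.add(pattern)
        return self.values.get(pattern)

table = PatternTable()
table.store(".X.O/.XO./....", "win")     # hypothetical zone-pattern encoding
queried_for_position = set()
print(table.lookup(".X.O/.XO./....", queried_for_position))  # "win"
print(table.lookup(".X.O/.XO./....", queried_for_position))  # None: skipped as redundant
```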
Continuous formulations of trajectory planning problems have two main benefits. First, constraints are guaranteed to be satisfied at all times. Second, dynamic obstacles can be naturally handled over time. This paper introduces a novel B-spline-based trajectory optimization method for multi-jointed robots that provides a continuous trajectory with guaranteed continuous constraint satisfaction. At the core of this method, basic B-spline operations, such as addition, multiplication, and differentiation, are rigorously defined and applied to the problem formulation. Unique characteristics of B-splines, such as the convex hull and smoothness properties, are utilized to reformulate the original continuous optimization problem into a finite-dimensional one. Collision avoidance with static obstacles is achieved using a signed distance field, while avoidance of dynamic obstacles is accomplished by constructing time-varying separating hyperplanes. Simulation results on various robots validate the effectiveness of the algorithm. In addition, this paper provides experimental validation with a 6-link FANUC robot avoiding static and moving obstacles.
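The reformulation into a finite-dimensional problem rests on the standard B-spline convex hull property; in the usual notation (not necessarily that of the paper):

```latex
% A degree-k B-spline trajectory with control points P_i and basis functions B_{i,k}:
C(t) = \sum_{i=0}^{n} P_i \, B_{i,k}(t), \qquad
B_{i,k}(t) \ge 0, \quad \sum_{i=0}^{n} B_{i,k}(t) = 1
% on each knot span of the valid parameter range, so C(t) lies in the convex hull
% of the locally active control points.
```

Consequently, for a convex constraint $g(\cdot) \le 0$, imposing $g(P_i) \le 0$ on the finitely many control points guarantees $g(C(t)) \le 0$ for all $t$, which is how a continuous-time constraint becomes a finite set of constraints on the decision variables.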